PerspectiveNet: A Scene-consistent Image Generator for New View Synthesis in Real Indoor Environments
Given a set of reference RGBD views of an indoor environment and a new viewpoint, our goal is to predict the view from that location. Prior work on new-view generation has predominantly focused on significantly constrained scenarios, typically involving artificially rendered views of isolated CAD models. Here we tackle a much more challenging version of the problem. We devise an approach that exploits known geometric properties of the scene (per-frame camera extrinsics and depth) in order to warp reference views into the new ones. The defects in the generated views are handled by a novel RGBD inpainting network, PerspectiveNet, that is fine-tuned for a given scene in order to obtain images that are geometrically consistent with all the views in the scene camera system. Experiments conducted on the ScanNet and SceneNet datasets reveal performance superior to strong baselines.
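The geometric warping step the abstract describes can be sketched with standard pinhole-camera math: unproject each source pixel with its depth, move it through world coordinates into the target camera, and reproject. The sketch below is a minimal illustration under assumptions not stated in the paper (shared 3x3 intrinsics K, 4x4 homogeneous poses, nearest-pixel forward splatting); all names are hypothetical, and the unfilled pixels of the result are exactly the "defects" an inpainting network would have to repair.

```python
import numpy as np

def warp_view(rgb, depth, K, T_src_to_world, T_world_to_tgt):
    """Forward-warp a source RGBD view into a target camera (illustrative sketch).

    rgb:   (H, W, 3) source image
    depth: (H, W) metric depth for the source view
    K:     (3, 3) pinhole intrinsics, assumed shared by both cameras
    T_*:   (4, 4) homogeneous transforms (hypothetical pose convention)
    Returns the warped image and a validity mask; pixels with no source
    correspondence stay zero and would be filled by inpainting.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unproject source pixels into 3D points in the source camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    cam_pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
    cam_pts = np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])
    # Transform through world coordinates into the target camera frame.
    tgt_pts = T_world_to_tgt @ (T_src_to_world @ cam_pts)
    z = tgt_pts[2]
    proj = K @ tgt_pts[:3]
    uu = np.round(proj[0] / z).astype(int)
    vv = np.round(proj[1] / z).astype(int)
    # Keep only points in front of the camera that land inside the frame.
    ok = (z > 0) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    out = np.zeros_like(rgb)
    mask = np.zeros((H, W), dtype=bool)
    out[vv[ok], uu[ok]] = rgb.reshape(-1, 3)[ok]
    mask[vv[ok], uu[ok]] = True
    return out, mask
```

With identical source and target poses the warp is the identity, which is a quick sanity check; real view changes leave holes (occlusions, out-of-frame regions) in the mask.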
Reviews: PerspectiveNet: A Scene-consistent Image Generator for New View Synthesis in Real Indoor Environments
Given a few RGBD images of a real indoor scene, as well as the camera locations where these were taken, the algorithm predicts RGBD images taken from different camera locations. The novelty is the use of a denoising auto-encoder for a given view and finding latent representations that are consistent across different views. Detailed comments: - It would be good if the whole process were described in steps, because it wasn't clear from the start what the overall approach is (maybe it would be for someone working on a similar topic). Some figures are good, but could be better - together with such a description. Something like the following would be useful for me: A) We are given a set of RGBD views along with camera locations of a given scene.
PerspectiveNet: A Scene-consistent Image Generator for New View Synthesis in Real Indoor Environments
Novotny, David; Graham, Ben; Reizenstein, Jeremy
Near-perfect point-goal navigation from 2.5 billion frames of experience
The AI community has a long-term goal of building intelligent machines that interact effectively with the physical world, and a key challenge is teaching these systems to navigate through complex, unfamiliar real-world environments to reach a specified destination -- without a pre-provided map. We are announcing today that Facebook AI has created a new large-scale distributed reinforcement learning (RL) algorithm called DD-PPO, which has effectively solved the task of point-goal navigation using only an RGB-D camera, GPS, and compass data. Agents trained with DD-PPO (which stands for decentralized distributed proximal policy optimization) achieve nearly 100 percent success in a variety of virtual environments, such as houses and office buildings. We have also successfully tested our model with tasks in real-world physical settings using a LoCoBot and Facebook AI's PyRobot platform. An unfortunate fact about maps is that they become outdated the moment they are created.